
Prod Release 19/12 #457

Merged · 10 commits merged into stable on Dec 20, 2023
Conversation

morgsmccauley (Collaborator) commented Dec 19, 2023

darunrs and others added 9 commits November 29, 2023 13:13
We are seeing errors in prod relating to JavaScript running out of
memory due to the heap growing. The most likely culprit is the block
prefetch. There may be some characteristics in prod that cause this
error, but not in dev. Failures cause the entire machine to crash and
restart.

We'll reduce the prefetch queue size, which is likely the largest memory consumer in the worker, to hopefully avoid this worker heap memory issue.
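
The worker itself is JavaScript; purely as an illustration of why the queue size bounds heap growth, here is a bounded-queue sketch in Rust (the `PREFETCH_QUEUE_SIZE` value and names are assumptions, not the actual code):

```rust
use tokio::sync::mpsc;

// Assumed value for illustration; smaller than the previous default.
const PREFETCH_QUEUE_SIZE: usize = 10;

fn prefetch_blocks(block_heights: Vec<u64>) -> mpsc::Receiver<u64> {
    // A bounded channel caps how many prefetched blocks sit in memory at
    // once, so shrinking the bound trades throughput for a smaller heap.
    let (tx, rx) = mpsc::channel(PREFETCH_QUEUE_SIZE);
    tokio::spawn(async move {
        for height in block_heights {
            // `send` waits once the queue is full, keeping memory bounded.
            if tx.send(height).await.is_err() {
                break; // consumer dropped the receiver
            }
        }
    });
    rx
}
```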
…ts (#438)

It was observed that there are three related bugs in stream message
processing:

1. In some cases, the stream will increment the sequence and not the
main ID, causing incrementId to skip messages.
2. The current stream ID is used when calling xRange, giving an
inaccurate count of messages if any skips occur.
3. Unprocessed messages are no longer reported when the queue of blocks
is empty. This occasionally led to a non-zero message count being
scraped and then left that way even when the stream was in fact empty.

In addition, xRange takes a significant amount of time to run depending
on the size of the queue. This impacts all other operations since Redis
is single-threaded. xLen takes much less time to call while still
accurately returning the count of messages in the stream.


To resolve the bugs, I now increment the sequence instead of the main
ID. In addition, if no more stream messages are being fetched, the
stream message ID is reset to '0' to ensure new messages are collected
regardless of their ID. Finally, I replaced xRange with xLen, and
changed a blocking while loop in the consumer to an if statement so
that when the queue is empty it continues, which triggers the finally block.
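
The real runner code is TypeScript; as a minimal illustration of the ID handling described above (Redis stream IDs have the form `<ms>-<seq>`), here is a hypothetical sketch:

```rust
/// Given the last processed stream ID, return the next exclusive start ID.
/// Incrementing only the sequence part guarantees no message is skipped,
/// even when the stream advances the main (millisecond) part on its own.
fn increment_stream_id(last_id: &str) -> Option<String> {
    let (main, seq) = last_id.split_once('-')?;
    let seq: u64 = seq.parse().ok()?;
    Some(format!("{main}-{}", seq + 1))
}

fn main() {
    // "1-0" -> "1-1": the next read starts just after the processed message.
    assert_eq!(increment_stream_id("1-0").as_deref(), Some("1-1"));
}
```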

I made some small additions to logging statements to include the indexer
type and block number for success/failure to help diagnose problems with
indexers in the future.
This PR introduces a new Rust service called Block Streamer. This is
essentially a refactored version of Historical Backfill which, rather
than stopping after the 'manual filtering' portion of the backfill,
continues indefinitely. Eventually, this service will be able to serve
requests, allowing control over these Block Streams. I've outlined the
major changes below.

## `trait S3Client`
Low-level S3 requests have been abstracted behind a `trait`. This allows
us to accept the `trait` as an argument and inject mock implementations
in tests. We no longer need to use real access keys, and can make more
concrete assertions.
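
A minimal sketch of the pattern, assuming `mockall`'s `automock` (the method shown is illustrative, not the actual API surface):

```rust
use async_trait::async_trait;

// `automock` generates a `MockS3Client` with expectation helpers that
// tests can inject wherever the trait is accepted.
#[cfg_attr(test, mockall::automock)]
#[async_trait]
pub trait S3Client {
    async fn get_text_file(&self, bucket: &str, key: &str) -> anyhow::Result<String>;
}

#[cfg(test)]
mod tests {
    use super::*;

    #[tokio::test]
    async fn returns_mocked_file() {
        let mut mock = MockS3Client::new();
        mock.expect_get_text_file()
            .returning(|_, _| Ok("block data".to_string()));

        // No real access keys needed; assertions are fully deterministic.
        let contents = mock.get_text_file("bucket", "key").await.unwrap();
        assert_eq!(contents, "block data");
    }
}
```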

## `DeltaLakeClient`
The majority of the S3-related logic, `queryapi_coordinator/src/s3.rs`
included, has been refactored into `DeltaLakeClient`. This client
encapsulates all logic relating to the interaction with, and consumption
of, `near-delta-lake` S3. Now, only a single method needs to be called
from `BlockStream` in order to get the relevant block heights for a given
contract_id/pattern, as sketched below. There's been a fair amount of
refactoring across the methods themselves; I've added tests to ensure
they still behave as expected.
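
A rough sketch of that single-call surface (the method name and signature are assumptions for illustration, not the actual API):

```rust
pub struct DeltaLakeClient;

impl DeltaLakeClient {
    /// Hypothetical single entry point: return every block height in
    /// near-delta-lake that matches the contract pattern, starting from
    /// the given height.
    pub async fn list_matching_block_heights(
        &self,
        start_block_height: u64,
        contract_pattern: &str,
    ) -> anyhow::Result<Vec<u64>> {
        // S3 listing/parsing elided for the sketch.
        let _ = (start_block_height, contract_pattern);
        Ok(vec![])
    }
}
```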

## `BlockStream`
This is the refactored version of Historical Backfill. Not much has
changed here in this version; the main change is that it now uses
`DeltaLakeClient`. This will eventually be expanded to provide more
control over the Stream.

## `indexer_rules_*` & `storage`
Currently these exist as separate crates within the `indexer/`
workspace. Rather than creating a workspace for `block-streamer`, I've
added the respective files for each to the single crate. This is
probably fine for `storage`, now called `redis`, but given that
`indexer_rules_*` is also used in the Registry Contract, I'll probably
extract it in a later PR to avoid it being duplicated.
This PR was automatically created by Snyk using the credentials of a
real user.

Keeping your Docker base image up-to-date means you'll benefit from
security fixes in the latest version of your chosen image.

#### Changes included in this PR 


- runner/Dockerfile

We recommend upgrading to `node:18.18.2`, as this image has only 155
known vulnerabilities. To do this, merge this pull request, then verify
your application still works as expected.



Some of the most important vulnerabilities in your base image include:

| Severity | Priority Score / 1000 | Issue | Exploit Maturity |
| :------: | :-------------------- | :---- | :--------------- |
| High | **852** | Out-of-bounds Write<br/>[SNYK-DEBIAN12-LIBWEBP-5893095](https://snyk.io/vuln/SNYK-DEBIAN12-LIBWEBP-5893095) | Mature |
| High | **600** | Resource Exhaustion<br/>[SNYK-DEBIAN12-NGHTTP2-5953379](https://snyk.io/vuln/SNYK-DEBIAN12-NGHTTP2-5953379) | Mature |



---

**Note:** _You are seeing this because you or someone else with access
to this repository has authorized Snyk to open fix PRs._

For more information:
🧐 [View latest project report](https://app.snyk.io/org/pagoda-pilot/project/7b2bf28a-8d8c-4d66-bba4-4e997309ba53?utm_source=github-enterprise&utm_medium=referral&page=fix-pr)

🛠 [Adjust project settings](https://app.snyk.io/org/pagoda-pilot/project/7b2bf28a-8d8c-4d66-bba4-4e997309ba53?utm_source=github-enterprise&utm_medium=referral&page=fix-pr/settings)


---

**Learn how to fix vulnerabilities with free interactive lessons:**

🦉 [Resource Exhaustion](https://learn.snyk.io/lesson/redos/?loc=fix-pr)

Co-authored-by: snyk-bot <[email protected]>
This PR exposes a gRPC endpoint for the Block Streamer service to
provide control over individual Block Streams. There are currently 3
methods: start/stop/list. The functionality across these methods is
quite basic; I didn't want to spend too much time fleshing them out as
I'm still not certain how exactly they will be used. I expect them to
change once the Control Plane starts using them.
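
A rough sketch of the shape of that control surface (the real service is gRPC; all method, type, and field names below are assumptions for illustration):

```rust
// Hypothetical, hand-written analogue of the generated gRPC service;
// the actual request/response messages come from the proto definitions.
#[derive(Debug)]
pub struct StreamInfo {
    pub stream_id: String,
    pub account_id: String,
    pub function_name: String,
}

pub trait BlockStreamerControl {
    /// Start a Block Stream for the given indexer function.
    fn start_stream(&self, account_id: &str, function_name: &str) -> anyhow::Result<String>;
    /// Stop a running Block Stream by its ID.
    fn stop_stream(&self, stream_id: &str) -> anyhow::Result<()>;
    /// List all currently running Block Streams.
    fn list_streams(&self) -> anyhow::Result<Vec<StreamInfo>>;
}
```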
This PR is a combination of refactoring and testing, the former enabling
the latter.

## Mock `impl` instead of `trait`
Originally, `trait`s such as `S3ClientTrait` were used to abstract the
behaviour of I/O-related `struct`s, allowing for the injection of mock
implementations via `automock`. This was fine, but led to very verbose
syntax which needed to be propagated throughout the codebase wherever
it was used.

Now, the `impl` itself is mocked, and either the mock or the real
implementation is returned depending on the `cfg(test)` flag. This means
we no longer need to worry about generics and can instead use the
`struct` itself, as we would with non-mocked code, as sketched below.

The trade-off here is that Rust now thinks the 'real' code is
dead/unused, as `test` is enabled by default.
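
A minimal sketch of the swap, assuming `mockall`'s `automock` on the `impl` block (all names are illustrative):

```rust
pub struct S3ClientImpl;

// `automock` on the impl block generates `MockS3ClientImpl` for tests.
#[cfg_attr(test, mockall::automock)]
impl S3ClientImpl {
    pub fn list_objects(&self, prefix: &str) -> Vec<String> {
        // Real S3 call elided for the sketch.
        vec![format!("{prefix}/000000000001/block.json")]
    }
}

// Callers just name `S3Client`; the concrete type swaps with the build
// profile, so no generics are needed at the call sites.
#[cfg(test)]
pub type S3Client = MockS3ClientImpl;
#[cfg(not(test))]
pub type S3Client = S3ClientImpl;
```

This is also where the dead-code warnings come from: under `cfg(test)` only the mock is referenced, so the compiler sees the real methods as unused.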

## Mocking `near-lake-framework`
`aws` exposes an HTTP client which can be configured with mock
request/response values. `aws-sdk-s3` can be configured to use this
client, creating a mock S3 client. Injecting this config into
`near-lake-framework` allows us to create deterministic outputs,
enabling unit testing.

`aws_config` is now taken as a parameter where relevant so that we can
mock `near-lake-framework` in our tests. The functions for achieving
this have been placed in `test_utils` so they can be reused.

## Testing `BlockStreamerService` and `BlockStream`
With `RedisClient`, `S3Client`, and `Lake` now mockable, tests have
been added for these `struct`s. I've also added a GitHub Action which
runs on every PR. All raw block data has been added to `data/`, which
makes up the majority of the diff in this PR.
This PR adds support for multiple contract filters.
This PR extracts the types exposed by the registry contract into their
own reusable crate: `registry_types`. This was done to avoid the
duplication across our different services, which each fetch the JSON
and then construct the Rust types.

## `registry_types` crate
The registry types were already partially shared between Coordinator and
the registry, but I've expanded on this to include all types rather than
just a subset. I've created a new crate rather than using the existing
`./indexer/indexer_rule_type` crate, as some packages needed to be
upgraded, which caused conflicts across the `indexer/` cargo workspace;
this seemed unnecessary to fix given that the crate is being deprecated.

## Registry contract updates
`near-sdk` version `4.1.1` is no longer compilable because it depends on
a non-existent crate, hence the upgrade to `5.0.0-alpha.1`. Since I was
already making changes here, I decided to remove the migration-related
code as it is no longer needed.

## Consuming `registry_types`
With the shared types in place, I've updated `block-streamer/` to use
them, removing the existing types which it duplicates. I also went ahead
and removed all types which were unused (left over from Alertexer).

`indexer_rule_type` depends on `near-sdk` version `4.1.1` (the broken
one), and since it's no longer shared, I removed that dependency so that
it can compile again.

Relates to #421
Adds a button to refresh the logs from the indexer logs page.
morgsmccauley requested a review from a team as a code owner December 19, 2023 01:26
We have an issue where, if the formatting of the SQL is not correct, or
we encounter a case in which we do not support the type generation
library for the context method, we fail very abruptly.

This PR handles these errors better (see the sketch after this list):

- If the SQL or code has a syntax error and formatting fails, we simply
return the unformatted code to the editor so the user can fix it.
- If the code and SQL both have syntax errors, the user will now see two
different alerts, one for each of them.
- We throw the error, but we also log everything to the console for
debugging purposes.
- We catch type generation errors and surface them via a new variable.
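
A language-agnostic sketch of the formatting fallback described above (the real editor code is TypeScript; `try_format` is a hypothetical stand-in for the actual formatter):

```rust
/// Hypothetical formatter hook standing in for the real SQL/JS formatter.
fn try_format(source: &str) -> Result<String, String> {
    if source.contains(";;") {
        Err("syntax error near ';;'".to_string())
    } else {
        Ok(source.trim().to_string())
    }
}

/// On failure, surface the error but hand back the original source so the
/// user can fix it in the editor instead of losing their input.
fn format_or_original(source: &str) -> (String, Option<String>) {
    match try_format(source) {
        Ok(formatted) => (formatted, None),
        Err(err) => (source.to_string(), Some(err)),
    }
}

fn main() {
    let (code, error) = format_or_original("SELECT 1;;");
    assert_eq!(code, "SELECT 1;;"); // unformatted code is preserved
    assert!(error.is_some()); // and the error is still reported
}
```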
morgsmccauley merged commit 4c2d5e8 into stable Dec 20, 2023
18 checks passed